Convolutional neural networks have been widely applied to medical image segmentation and have achieved considerable performance. However, the performance can be significantly affected by the domain gap between the training data (source domain) and the testing data (target domain). To address this issue, we propose a data-manipulation-based domain generalization method, called Automated Augmentation for Domain Generalization (AADG). Our AADG framework can effectively sample data augmentation policies that generate novel domains and diversify the training set from an appropriate search space. Specifically, we introduce a novel proxy task that maximizes the diversity among multiple augmented novel domains, measured by the Sinkhorn distance in a unit sphere space, making automated augmentation tractable. Adversarial training and deep reinforcement learning are employed to efficiently search this objective. Quantitative and qualitative experiments are comprehensively performed on 11 publicly accessible fundus image datasets (four for retinal vessel segmentation, four for optic disc and cup (OD/OC) segmentation, and three for retinal lesion segmentation). Two OCTA datasets for retinal vasculature segmentation are further involved to validate cross-modality generalization. Our proposed AADG exhibits state-of-the-art generalization performance and outperforms existing approaches by considerable margins on the retinal vessel, OD/OC, and lesion segmentation tasks. The learned policies are empirically confirmed to be model-agnostic and transfer well to other models. The source code is available at https://github.com/crazorback/aadg.
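As a rough illustration of the diversity objective described above, the following minimal PyTorch sketch scores two batches of augmented-domain embeddings with an entropy-regularized optimal transport (Sinkhorn) distance after projecting them onto the unit sphere. The feature dimensions, the cosine cost, and the hyperparameters (epsilon, n_iters) are assumptions for illustration, not the AADG implementation.

```python
import torch

def sinkhorn_distance(x, y, epsilon=0.05, n_iters=200):
    """Entropy-regularized optimal transport distance between two embedding batches.

    x: (n, d) and y: (m, d) features from two differently augmented domains.
    """
    # Project embeddings onto the unit sphere so the cost is cosine-based.
    x = torch.nn.functional.normalize(x, dim=1)
    y = torch.nn.functional.normalize(y, dim=1)
    cost = 1.0 - x @ y.t()                                   # (n, m), values in [0, 2]

    n, m = cost.shape
    mu, nu = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    f, g = torch.zeros(n), torch.zeros(m)

    # Log-domain Sinkhorn iterations for numerical stability.
    for _ in range(n_iters):
        f = epsilon * (torch.log(mu) - torch.logsumexp((g[None, :] - cost) / epsilon, dim=1))
        g = epsilon * (torch.log(nu) - torch.logsumexp((f[:, None] - cost) / epsilon, dim=0))

    # Transport plan and resulting distance (usable as a diversity reward).
    pi = torch.exp((f[:, None] + g[None, :] - cost) / epsilon)
    return torch.sum(pi * cost)

# Toy usage: diversity between two batches of 128-d embeddings.
reward = sinkhorn_distance(torch.randn(32, 128), torch.randn(48, 128))
```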
Color fundus photography and optical coherence tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities contain prominent biomarkers indicative of suspected glaucoma. Clinically, both screenings are often recommended for a more accurate and reliable diagnosis. However, although numerous computer-aided diagnosis algorithms have been proposed based on fundus images or OCT volumes, there are still few methods that leverage both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D color fundus photographs and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework has been established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and the top 10 teams were finally selected for the final stage. We analyze their results and summarize their methods in this paper. Since all of these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules they proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
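For readers unfamiliar with the task setup, a generic two-branch grader of the kind the challenge implies (a 2D CNN for fundus, a 3D CNN for OCT, late fusion into a grading head) might look like the sketch below. This is not any participating team's method; all layer sizes and the three-grade head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualBranchGrader(nn.Module):
    """Minimal fundus + OCT glaucoma grader: 2D branch, 3D branch, late fusion."""

    def __init__(self, num_classes=3):
        super().__init__()
        # 2D branch for color fundus photographs.
        self.fundus_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # 3D branch for OCT volumes (1 channel, D x H x W).
        self.oct_branch = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Late fusion by concatenation, then a glaucoma grading head.
        self.head = nn.Linear(32 + 16, num_classes)

    def forward(self, fundus, oct_volume):
        fused = torch.cat([self.fundus_branch(fundus), self.oct_branch(oct_volume)], dim=1)
        return self.head(fused)

# Toy forward pass with randomly generated inputs.
model = DualBranchGrader()
logits = model(torch.randn(2, 3, 256, 256), torch.randn(2, 1, 32, 64, 64))
```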
End-to-end task bots are typically learned over a static and usually limited-size corpus. However, when deployed in dynamic, changing, and open environments to interact with users, task bots tend to fail when confronted with data that deviate from the training corpus, i.e., out-of-distribution samples. In this paper, we study the problem of automatically adapting task bots to changing environments by learning from human-bot interactions with minimal or zero human annotation. We propose SL-AGENT, a novel self-learning framework for building end-to-end task bots. SL-AGENT consists of a dialog model and a pre-trained reward model that predicts the quality of an agent response. It enables task bots to automatically adapt to changing environments by learning from the unlabeled human-bot dialog logs accumulated after deployment, via reinforcement learning with the incorporated reward model. Experimental results on four well-studied dialog tasks show the effectiveness of SL-AGENT in automatically adapting to changing environments, based on both automatic and human evaluations. We will release the code and data for further research.
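A hedged sketch of the self-learning idea, not the SL-AGENT implementation: a frozen reward model scores logged (context, response) pairs, and the score weights a REINFORCE-style likelihood update of the dialog model. The toy single-token models, shapes, and learning rate below are placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-ins: in practice these would be a pretrained dialog model and reward model.
vocab_size, hidden = 100, 32
dialog_model = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Flatten(), nn.Linear(hidden * 8, vocab_size))
reward_model = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Flatten(), nn.Linear(hidden * 9, 1))
optimizer = torch.optim.Adam(dialog_model.parameters(), lr=1e-4)

def self_learning_step(logged_context, logged_response_token):
    """One reward-weighted policy-gradient update on an unlabeled human-bot log entry."""
    logits = dialog_model(logged_context)                     # (B, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen_logp = log_probs.gather(1, logged_response_token)  # log-prob of the logged response

    with torch.no_grad():                                     # frozen reward model scores quality
        pair = torch.cat([logged_context, logged_response_token], dim=1)
        reward = reward_model(pair).squeeze(1)

    loss = -(reward * chosen_logp.squeeze(1)).mean()          # reward-weighted likelihood
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Usage on a fake batch of 8-token logged contexts and single-token responses.
ctx = torch.randint(0, vocab_size, (4, 8))
resp = torch.randint(0, vocab_size, (4, 1))
self_learning_step(ctx, resp)
```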
Medical image quality assessment (MIQA) is a vital prerequisite for various medical image analysis applications. Most existing MIQA algorithms are fully supervised and require a large amount of annotated data. However, annotating medical images is time-consuming and labor-intensive. In this paper, we propose an unsupervised anomaly-aware framework with test-time clustering for optical coherence tomography angiography (OCTA) image quality assessment, in a setting wherein only a set of high-quality samples is accessible during the training phase. Specifically, a feature-embedding-based low-quality representation module is proposed to quantify the quality of OCTA images and to discriminate between outstanding quality and non-outstanding quality. Within the non-outstanding quality class, to further distinguish gradable images from ungradable ones, we perform dimension reduction and clustering on multi-scale image features extracted by the trained OCTA quality representation network. Extensive experiments on the publicly accessible dataset sOCTA-3*3-10k successfully establish the superiority of our proposed framework.
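The test-time clustering step lends itself to a short sketch: reduce the multi-scale features of non-outstanding-quality images and split them into two groups. PCA and k-means are used here as stand-ins; the actual dimension-reduction and clustering choices in the paper may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def test_time_cluster(features, n_components=16, seed=0):
    """Split non-outstanding-quality images into two groups (e.g., gradable vs. ungradable).

    features: (n_images, feature_dim) multi-scale features from the trained
    quality representation network (here a random stand-in array).
    """
    reduced = PCA(n_components=n_components, random_state=seed).fit_transform(features)
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(reduced)
    return labels

# Toy usage with random features standing in for network embeddings.
fake_features = np.random.randn(200, 512)
cluster_ids = test_time_cluster(fake_features)
```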
Weakly-supervised learning (WSL) has been proposed to alleviate the conflict between data annotation cost and model performance by employing sparsely-grained (i.e., point-, box-, and scribble-wise) supervision, and has shown promising performance, particularly in the image segmentation field. However, it remains a very challenging problem due to the limited supervision, especially when only a small number of labeled samples are available. Additionally, almost all existing WSL segmentation methods are designed for star-convex structures, which are very different from curvilinear structures such as vessels and nerves. In this paper, we propose a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg, based on image synthesis. A background generator delivers image backgrounds that closely match real distributions through inpainting of dilated skeletons. The extracted backgrounds are then combined, via a multilayer patch-wise contrastive learning synthesizer, with randomly emulated curves generated by a Space Colonization Algorithm-based foreground generator. In this way, a synthetic dataset with both images and curve segmentation labels is obtained at the cost of only one or a few noisy skeleton annotations. Finally, a segmenter is trained with the generated dataset and possibly an unlabeled dataset. The proposed YoloCurvSeg is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and CHASEDB1), and the results show that YoloCurvSeg outperforms state-of-the-art WSL segmentation methods by large margins. With only one noisy skeleton annotation (respectively 0.14%, 0.02%, 1.4%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset. Code and datasets will be released at https://github.com/llmir/YoloCurvSeg.
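To illustrate the background-generation step (inpainting a dilated skeleton), the sketch below uses classical OpenCV Telea inpainting as a stand-in for the learned inpainting the paper relies on; the kernel size, dilation iterations, and inpaint radius are arbitrary.

```python
import cv2
import numpy as np

def generate_background(image, skeleton, dilate_iters=3):
    """Remove the curvilinear foreground by inpainting a dilated skeleton mask.

    image: HxW (or HxWx3) uint8 image; skeleton: HxW binary mask of a noisy skeleton annotation.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.dilate(skeleton.astype(np.uint8), kernel, iterations=dilate_iters)
    # Telea inpainting fills the masked curve region from the surrounding background texture.
    return cv2.inpaint(image, mask, 5, cv2.INPAINT_TELEA)

# Toy usage: a synthetic image with a bright line, plus its skeleton.
img = np.full((128, 128), 120, np.uint8)
cv2.line(img, (10, 64), (118, 64), 255, 2)
skel = np.zeros((128, 128), np.uint8)
cv2.line(skel, (10, 64), (118, 64), 1, 1)
background = generate_background(img, skel)
```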
This paper addresses the problem of optimizing the charging/discharging schedules of electric vehicles (EVs) when they participate in demand response (DR). Since there are uncertainties in an EV's remaining energy, arrival and departure times, and future electricity prices, it is difficult to make charging decisions that minimize the charging cost while guaranteeing that the EV battery's state of charge (SOC) stays within a certain range. To handle this dilemma, this paper formulates the EV charging scheduling problem as a constrained Markov decision process (CMDP). By synergistically combining the augmented Lagrangian method and the soft actor-critic algorithm, a novel safe off-policy reinforcement learning (RL) approach is proposed to solve the CMDP. The actor network is updated in a policy-gradient manner with the Lagrangian value function. A double-critic network is adopted to synchronously estimate the action-value function and avoid overestimation bias. The proposed algorithm does not require a strong convexity guarantee for the examined problems and is sample efficient. Comprehensive numerical experiments with real-world electricity prices demonstrate that the proposed algorithm can achieve high solution optimality and constraint compliance.
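A minimal sketch of the safety mechanism described above, using a plain Lagrangian relaxation rather than the full augmented-Lagrangian update: a non-negative multiplier couples the cost critic into the actor objective and is increased by dual ascent whenever the expected constraint cost (e.g., SOC violation) exceeds its limit. All tensors and coefficients are placeholders.

```python
import torch
import torch.nn.functional as F

# Placeholder critic outputs and policy log-probs for a batch of sampled actions.
q_reward = torch.randn(64)        # reward critic: expected charging-cost return
q_cost = torch.randn(64).abs()    # cost critic: expected SOC-constraint violation
log_prob = torch.randn(64)        # log pi(a|s) of the sampled actions
alpha, cost_limit, lambda_lr = 0.2, 0.1, 1e-3

# Lagrange multiplier, kept non-negative through softplus.
lambda_raw = torch.zeros((), requires_grad=True)
lam = F.softplus(lambda_raw)

# Actor objective: entropy-regularized reward minus multiplier-weighted cost.
actor_loss = (alpha * log_prob - q_reward + lam.detach() * q_cost).mean()

# Dual ascent on the multiplier: grow lambda while the average cost exceeds the limit.
dual_loss = -(lam * (q_cost.mean().detach() - cost_limit))
dual_loss.backward()
with torch.no_grad():
    lambda_raw -= lambda_lr * lambda_raw.grad

# In a full agent, actor_loss would update the policy network, the double critics
# would be trained by temporal-difference learning, and this dual step would run
# once per training iteration.
```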
Machine learning algorithms that minimize the average training loss often suffer from poor generalization performance due to the greedy exploitation of correlations among the training data, which are not stable under distribution shifts. This has inspired various domain generalization (DG) works, in which a series of methods, such as Causal Matching and FISH, work through pairwise domain operations. They require O(n^2) pairwise domain operations with n domains, each of which is often very expensive. Moreover, while a common objective in the DG literature is to learn representations invariant to domain-induced spurious correlations, we highlight the importance of also mitigating spurious correlations induced by objects. Based on the observation that diversity helps mitigate spurious correlations, we propose a diversity-boosted two-level sampling framework (DOMI) that leverages determinantal point processes (DPPs) to efficiently sample the most informative domains from a large pool. We show that DOMI helps train robust models against spurious correlations from both the domain side and the object side, substantially improving the performance of backbone DG algorithms on the Rotated MNIST, Rotated Fashion-MNIST, and iWildCam datasets.
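The domain-selection step can be illustrated with a standard greedy MAP routine for a DPP: build a similarity kernel over domain embeddings and repeatedly add the domain that maximizes the log-determinant of the selected submatrix. The RBF kernel and random embeddings below are illustrative assumptions, not DOMI's exact construction.

```python
import numpy as np

def greedy_dpp(embeddings, k):
    """Greedily select k mutually diverse items under a DPP with an RBF similarity kernel."""
    n = embeddings.shape[0]
    sq_dists = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    L = np.exp(-sq_dists / (2.0 * np.median(sq_dists) + 1e-8))   # DPP kernel matrix

    selected = []
    for _ in range(k):
        best_item, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            # Log-determinant of the candidate submatrix; larger means more diverse.
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best_item, best_gain = i, gain
        selected.append(best_item)
    return selected

# Toy usage: pick the 5 most mutually diverse of 40 domain embeddings.
domain_embeddings = np.random.randn(40, 16)
chosen_domains = greedy_dpp(domain_embeddings, k=5)
```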
As wireless standards evolve, more complex functionalities are introduced to address increasing requirements in terms of throughput, latency, security, and efficiency. To unleash the potential of such new features, artificial intelligence (AI) and machine learning (ML) are currently being leveraged to derive models and protocols from data rather than by hand-programming. In this paper, we explore the feasibility of applying ML to next-generation wireless local area networks (WLANs). More specifically, we focus on the IEEE 802.11ax spatial reuse (SR) problem and predict its performance through federated learning (FL) models. The set of FL solutions overviewed in this work is part of the 2021 International Telecommunication Union (ITU) AI for 5G Challenge.
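Since the overviewed solutions are federated, a generic FedAvg aggregation step (not any specific challenge entry) is sketched below: client model parameters are averaged with weights proportional to local dataset size.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: aggregate per-client model parameters, weighted by local dataset size.

    client_weights: list of dicts mapping parameter name -> numpy array.
    client_sizes: number of local training samples per client (e.g., per WLAN deployment).
    """
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        global_weights[name] = sum(
            (size / total) * weights[name]
            for weights, size in zip(client_weights, client_sizes)
        )
    return global_weights

# Toy usage: three clients, each holding a tiny linear model for SR throughput prediction.
clients = [{"w": np.random.randn(4), "b": np.random.randn(1)} for _ in range(3)]
global_model = federated_average(clients, client_sizes=[120, 80, 200])
```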
Various deep learning models have been developed to segment anatomical structures from medical images, but they typically perform poorly when tested on another target domain with a different data distribution. Recently, unsupervised domain adaptation methods have been proposed to alleviate this so-called domain shift issue, but most of them are designed for scenarios with relatively small domain shifts and are likely to fail when a large domain gap is encountered. In this paper, we propose DCDA, a novel cross-modality unsupervised domain adaptation framework for tasks with large domain shifts, e.g., segmenting retinal vessels from OCTA and OCT images. DCDA mainly consists of a disentangled representation style transfer (DRST) module and a collaborative consistency learning (CCL) module. DRST decomposes images into content components and style codes and performs style transfer and image reconstruction. CCL contains two segmentation models, one for the source domain and the other for the target domain. The two models use labeled data (together with the corresponding transferred images) for supervised learning and perform collaborative consistency learning on unlabeled data. Each model focuses on its corresponding single domain and aims to yield an expertized domain-specific segmentation model. Through extensive experiments on retinal vessel segmentation, our framework achieves Dice scores close to those of target-trained oracles both from OCTA to OCT and from OCT to OCTA, significantly outperforming other state-of-the-art methods.
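A rough sketch of how the CCL losses could be combined, under the simplifying assumption that both segmenters see the same unlabeled image (in the paper each model works on its own domain's rendering): each model is supervised on labeled or style-transferred data, and a consistency term aligns their soft predictions on unlabeled data. The tiny networks and equal loss weights are placeholders.

```python
import torch
import torch.nn.functional as F

def ccl_losses(model_src, model_tgt, labeled_src, src_masks, transferred_to_tgt, unlabeled):
    """Supervised + collaborative consistency losses for the two domain-specific segmenters.

    labeled_src / src_masks: labeled source images and ground-truth masks.
    transferred_to_tgt: the same labeled images rendered in target style (labels reused).
    unlabeled: unlabeled images on which the two models should agree.
    """
    # Supervised terms: each model learns from (possibly style-transferred) labeled data.
    sup_src = F.cross_entropy(model_src(labeled_src), src_masks)
    sup_tgt = F.cross_entropy(model_tgt(transferred_to_tgt), src_masks)

    # Consistency term: soft predictions of the two models should match on unlabeled data.
    p_src = F.softmax(model_src(unlabeled), dim=1)
    p_tgt = F.softmax(model_tgt(unlabeled), dim=1)
    consistency = F.mse_loss(p_src, p_tgt)

    return sup_src + sup_tgt + consistency

# Toy usage with tiny fully-convolutional segmenters and random 2-class data.
seg = lambda: torch.nn.Conv2d(1, 2, 3, padding=1)
model_a, model_b = seg(), seg()
imgs = torch.randn(2, 1, 64, 64)
masks = torch.randint(0, 2, (2, 64, 64))
loss = ccl_losses(model_a, model_b, imgs, masks, imgs, torch.randn(2, 1, 64, 64))
```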
Glaucoma is one of the ophthalmic diseases that may cause blindness, for which early detection and treatment are very important. Fundus images and optical coherence tomography (OCT) images are both widely used modalities for diagnosing glaucoma. However, existing glaucoma grading approaches mainly utilize a single modality, ignoring the complementary information between fundus and OCT. In this paper, we propose an efficient multi-modality supervised contrastive learning framework, named COROLLA, for glaucoma grading. Through layer segmentation as well as thickness calculation and projection, retinal thickness maps are extracted from the original OCT volumes and used as a replacement modality, resulting in more efficient computation with less memory usage. Given the high structural and distributional similarities among medical image samples, we employ supervised contrastive learning to increase the model's discriminative power with better convergence. In addition, feature-level fusion of paired fundus images and thickness maps is performed to improve diagnostic accuracy. On the GAMMA dataset, our COROLLA framework achieves overwhelming glaucoma grading performance compared with state-of-the-art methods.
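The supervised contrastive objective named above can be sketched directly; the implementation below follows the standard SupCon formulation over fused embeddings and grade labels, with the projection dimension, batch size, and temperature chosen arbitrarily rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: embeddings of the same glaucoma grade are pulled together.

    features: (B, d) projection-head outputs; labels: (B,) grade labels.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                   # (B, B) similarities
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability

    batch = labels.shape[0]
    same_label = (labels[:, None] == labels[None, :]).float()
    self_mask = 1.0 - torch.eye(batch)
    positives = same_label * self_mask                             # positives exclude the anchor itself

    exp_logits = torch.exp(logits) * self_mask
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-probability over positives for each anchor that has at least one positive.
    pos_count = positives.sum(dim=1)
    mean_log_prob_pos = (positives * log_prob).sum(dim=1) / pos_count.clamp(min=1)
    return -(mean_log_prob_pos[pos_count > 0]).mean()

# Toy usage: fused fundus + thickness-map embeddings and 3-class grade labels.
embeddings = torch.randn(16, 128)
grades = torch.randint(0, 3, (16,))
loss = supervised_contrastive_loss(embeddings, grades)
```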